Human Question Answering Performance using an Interactive Information Retrieval System
Authors

Abstract
Every day, people widely use information retrieval (IR) systems to find documents that answer their questions. Compared to these IR systems, question answering (QA) systems aim to increase the speed at which users find answers by retrieving answers rather than documents. To better understand how IR systems compare to QA systems, we measured the performance of humans using an interactive IR system to answer questions. We conducted our experiments within the framework of the TREC 2007 complex, interactive question answering (ciQA) track. We found that the average QA system was comparable to humans using an IR system. Our results also show that for some users IR systems can be powerful question answering systems. After only 5 minutes of usage per question, one user of the IR system obtained an average F (β = 3) score of 0.800, which outperformed the best QA system by 27% and the average QA system by 40%. After 10 minutes of usage, 5 of 8 users of the IR system obtained a higher performance than the average QA system. To achieve superior performance, future QA systems should combine the flexibility and precision of IR systems with the ease-of-use and recall advantages of QA systems.
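The F (β = 3) scores reported above follow the standard recall-weighted F-measure used in the ciQA track, where β = 3 makes recall count far more than precision. A minimal sketch of that formula (the function name and the example precision/recall values are illustrative, not taken from the paper):

```python
def f_beta(precision: float, recall: float, beta: float = 3.0) -> float:
    """F-beta score: beta > 1 weights recall more heavily than precision."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# A recall-heavy answer set scores well under beta = 3:
print(round(f_beta(0.5, 0.9), 3))  # → 0.833
```

With β = 3 the recall term dominates the denominator, which is why a system returning many partially relevant nuggets can still score highly, as the abstract's comparison between IR users and QA systems suggests.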
Similar resources
Boosting Passage Retrieval through Reuse in Question Answering
Question Answering (QA) is an important emerging field in Information Retrieval. In a QA system, the archive of questions previously asked of the system forms a collection rich in useful factual nuggets. This paper makes an initial attempt to investigate reusing the facts contained in the archive of previous questions to improve performance in answering future related factoid questions. I...
UMass Complex Interactive Question Answering (ciQA) 2007: Human Performance as Question Answerers
Every day, people widely use information retrieval (IR) systems to answer their questions. We utilized the TREC 2007 complex, interactive question answering (ciQA) track to measure the performance of humans using an interactive IR system to answer questions. Using our IR system, assessors searched for relevant documents and recorded answers to their questions. We submitted the assessors’ answer...
A New Statistical Model for Evaluation Interactive Question Answering Systems Using Regression
The development of computer systems and the extensive use of information technology in everyday life have made quick access to information increasingly important. The growing volume of information makes it difficult to manage or control, so tools are needed to exploit this information. The QA system is ...
An Evaluation of Spoken and Textual Interaction in the RITEL Interactive Question Answering System
The RITEL project aims to integrate a spoken language dialogue system and an open-domain information retrieval system in order to enable human users to ask a general question and to refine their search for information interactively. This type of system is often referred to as an Interactive Question Answering (IQA) system. In this paper, we present an evaluation of how the performance of the RI...
Investigating Embedded Question Reuse in Question Answering
The investigation presented in this paper is a novel method in question answering (QA) that enables a QA system to gain performance by reusing information in the answer to one question to answer another related question. Our analysis shows that a pair of questions in a general open-domain QA setting can have an embedding relation through their mentions of noun phrase expressions. We present methods f...
Journal title:
Volume, issue:
Pages: -
Publication date: 2008